
To Read: Universal Transformers


In 2017, Google released the machine learning model Transformer, which far outperformed earlier algorithms on machine translation and other language understanding tasks. Today (2018-08-16), Google released the newest version of that model, the Universal Transformer, which closes the gap between practical sequence models that are competitive on large-scale language understanding tasks and computationally universal models; its BLEU score is 0.9 points higher than last year's Transformer. The Universal Transformer also generalizes markedly better on a range of difficult language understanding tasks, and it sets a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.

Universal Transformers

Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, Łukasz Kaiser
(Submitted on 10 Jul 2018)

Self-attentive feed-forward sequence models have been shown to achieve impressive results on sequence modeling tasks, thereby presenting a compelling alternative to recurrent neural networks (RNNs), which have remained the de-facto standard architecture for many sequence modeling problems to date. Despite these successes, however, feed-forward sequence models like the Transformer fail to generalize in many tasks that recurrent models handle with ease (e.g. copying when the string lengths exceed those observed at training time). Moreover, and in contrast to RNNs, the Transformer model is not computationally universal, limiting its theoretical expressivity. In this paper we propose the Universal Transformer which addresses these practical and theoretical shortcomings and we show that it leads to improved performance on several tasks. Instead of recurring over the individual symbols of sequences like RNNs, the Universal Transformer repeatedly revises its representations of all symbols in the sequence with each recurrent step. In order to combine information from different parts of a sequence, it employs a self-attention mechanism in every recurrent step. Assuming sufficient memory, its recurrence makes the Universal Transformer computationally universal. We further employ an adaptive computation time (ACT) mechanism to allow the model to dynamically adjust the number of times the representation of each position in a sequence is revised. Beyond saving computation, we show that ACT can improve the accuracy of the model. Our experiments show that on various algorithmic tasks and a diverse set of large-scale language understanding tasks the Universal Transformer generalizes significantly better and outperforms both a vanilla Transformer and an LSTM in machine translation, and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.
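To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a Universal Transformer encoder: a single Transformer block whose weights are shared across recurrent steps repeatedly revises the representation of every position via self-attention, and a simplified ACT-style halting rule decides per position how many refinement steps it receives. All names and hyperparameters (`UniversalTransformerEncoder`, `max_steps`, `halt_threshold`, the learned step embedding) are illustrative assumptions, and the ACT logic is reduced relative to the paper.

```python
# A minimal sketch of the Universal Transformer's depth-wise recurrence
# with a simplified per-position ACT halting rule. Hyperparameters are
# illustrative, not taken from the paper.
import torch
import torch.nn as nn


class UniversalTransformerEncoder(nn.Module):
    def __init__(self, d_model=128, n_heads=8, d_ff=512,
                 max_steps=6, halt_threshold=0.99):
        super().__init__()
        # One block whose weights are shared across all recurrent steps.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # The paper adds sinusoidal position-and-step ("coordinate")
        # embeddings each step; a learned per-step table is a stand-in here.
        self.step_emb = nn.Embedding(max_steps, d_model)
        # ACT: one scalar halting probability per position per step.
        self.halt = nn.Linear(d_model, 1)
        self.max_steps = max_steps
        self.halt_threshold = halt_threshold

    def forward(self, x):                        # x: (batch, seq, d_model)
        b, s, _ = x.shape
        halting = x.new_zeros(b, s)              # accumulated halting prob
        weighted_state = torch.zeros_like(x)     # ACT-weighted output

        for t in range(self.max_steps):
            still_running = (halting < self.halt_threshold).float()
            p = torch.sigmoid(self.halt(x)).squeeze(-1) * still_running
            # Positions crossing the threshold this step contribute their
            # remainder (1 - accumulated probability) instead of p.
            newly_halted = (halting + p >= self.halt_threshold).float() * still_running
            p = p * (1 - newly_halted) + (1 - halting) * newly_halted
            halting = halting + p

            # One shared-weight Transformer step revises every position.
            h = x + self.step_emb.weight[t]
            a, _ = self.attn(h, h, h)
            h = self.norm1(h + a)
            h = self.norm2(h + self.ff(h))

            # Mix the revised state into the output, weighted by p;
            # positions that halted earlier have p = 0 and stay frozen.
            weighted_state = weighted_state + p.unsqueeze(-1) * h
            x = h
            if bool((halting >= self.halt_threshold).all()):
                break                            # every position has halted
        return weighted_state


# Toy usage: refine representations of 2 sequences of length 10.
enc = UniversalTransformerEncoder()
out = enc(torch.randn(2, 10, 128))
print(out.shape)                                 # torch.Size([2, 10, 128])
```

Weight sharing across steps is what turns depth into recurrence: the same block can, in principle, be unrolled as many times as an input requires, which is the intuition behind the computational-universality claim, while the halting rule lets easy positions stop refining early.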


Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:1807.03819 [cs.CL]
(or arXiv:1807.03819v1 [cs.CL] for this version)
